
    Moderators of the effect of psychological interventions on depression and anxiety in cardiac surgery patients: A systematic review and meta-analysis

    Cardiac surgery patients may be offered psychological interventions to counteract the depression and anxiety associated with surgical procedures. This systematic review and meta-analysis investigated whether intervention efficacy was affected by type of cardiac procedure/cardiac event; control condition content; intervention duration; intervention timing; facilitator type; and risk of bias. MEDLINE, EMBASE, and PsycINFO were searched for randomized controlled trials comparing anxiety and depression outcomes before and after psychological and cardiac interventions. Twenty-four studies met the inclusion criteria for the systematic review (N = 2718) and 16 of those were meta-analysed (N = 1928). Depression and anxiety outcomes were reduced more by interventions that lasted longer, were delivered post-surgery, and were delivered by trained health professionals. Depression (but not anxiety) was reduced more when the experimental intervention was compared to an ‘alternative’ intervention, and when the intervention was delivered to coronary artery bypass graft patients. Anxiety (but not depression) was reduced more when interventions were delivered to implantable cardioverter defibrillator patients, and in studies at ‘high’ or ‘unclear’ risk of bias. In addition to estimating efficacy, future work in this domain needs to take into account the moderating effects of intervention, sample, and study characteristics.

    Cars, CONSORT 2010, and Clinical Practice

    Just as you would not buy a car without key information such as its service history, you should not "buy" a clinical trial report without key information such as concealment of allocation. Implementation of the updated CONSORT 2010 statement enables the reader to see exactly what was done in a trial, to whom and when. A fully "CONSORTed" trial report does not necessarily mean the trial is a good one, but at least the reader can make a judgement. Clear reporting is a prerequisite for judging study quality. The CONSORT statement evolves as empirical research moves on. CONSORT 2010 is even clearer than before and includes some new items, with a particular emphasis on selective reporting of outcomes. The challenge is for everyone to use it.

    The challenges of analyzing behavioral response study data: an overview of the MOCHA (Multi-study OCean acoustics Human effects Analysis) project

    This paper describes the MOCHA project, which aims to develop novel approaches for the analysis of data collected during Behavioral Response Studies (BRSs). BRSs are experiments aimed at directly quantifying the effects of controlled dosages of natural or anthropogenic stimuli (typically sound) on marine mammal behavior. These experiments typically yield small sample sizes relative to variability, so we analyze a number of studies in combination to maximize the gain from each one. We describe a suite of analytical tools applied to BRS data on beaked whales, including a simulation study aimed at informing future experimental design.

    Extent, Awareness and Perception of Dissemination Bias in Qualitative Research: An Explorative Survey

    BACKGROUND: Qualitative research findings are increasingly used to inform decision-making. Research has indicated that not all quantitative research on the effects of interventions is disseminated or published. The extent to which qualitative researchers also systematically underreport or fail to publish certain types of research findings, and the impact this may have, has received little attention. METHODS: An online survey was conducted to gather data regarding non-dissemination and dissemination bias in qualitative research. We invited relevant stakeholders through our professional networks and authors of qualitative research identified through a systematic literature search, and recruited further participants via snowball sampling. RESULTS: 1032 people took part in the survey, of whom 859 identified as researchers, 133 as editors and 682 as peer reviewers. 68.1% of the researchers said that they had conducted at least one qualitative study that they had not published in a peer-reviewed journal. The main reasons for non-dissemination were that publication was still intended (35.7%), that resource constraints prevented it (35.4%), and that the authors gave up after the paper was rejected by one or more journals (32.5%). A majority of the editors and peer reviewers "(strongly) agreed" that the main reasons for rejecting a manuscript of a qualitative study were inadequate study quality (59.5%; 68.5%) and inadequate reporting quality (59.1%; 57.5%). Of 800 respondents, 83.1% "(strongly) agreed" that non-dissemination and possible resulting dissemination bias might undermine the willingness of funders to support qualitative research. 72.6% and 71.2%, respectively, "(strongly) agreed" that non-dissemination might lead to inappropriate health policy and health care. CONCLUSIONS: The proportion of non-dissemination in qualitative research is substantial. Researchers, editors and peer reviewers play an important role in this. Non-dissemination and resulting dissemination bias may impact on health care research, practice and policy. More detailed investigations on patterns and causes of the non-dissemination of qualitative research are needed.

    Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review

    Background: Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. Methods: We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. Results: For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. 
Illustrative meta-analyses showed that, when replacing a missing SD, the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios omitting trials gave superior results. Conclusions: Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
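To make the kinds of method the review describes concrete, the sketch below implements two widely used rules of thumb of this family: estimating the SD from the reported range, and estimating the mean from the median and quartiles. These specific formulas are common approximations of the sort surveyed; they are not necessarily the exact formulas the review evaluated, and the trial values used are hypothetical.

```python
# Two simple imputation rules for trials that report only order statistics,
# of the kind surveyed in the review. Both assume roughly normal data.

def sd_from_range(minimum, maximum):
    """Rough SD approximation from the sample range: SD ~ range / 4."""
    return (maximum - minimum) / 4

def mean_from_quartiles(q1, median, q3):
    """Approximate the mean as the average of median and quartiles."""
    return (q1 + median + q3) / 3

# hypothetical trial reporting only median 12, IQR 8-18, range 2-30
print(sd_from_range(2, 30))            # 7.0
print(mean_from_quartiles(8, 12, 18))  # 12.666...
```

In a meta-analysis workflow, such imputed means and SDs would then feed into the usual effect-size calculations in place of the unreported statistics, allowing the trial to be retained rather than omitted.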

    Completeness and Changes in Registered Data and Reporting Bias of Randomized Controlled Trials in ICMJE Journals after Trial Registration Policy

    We assessed the adequacy of randomized controlled trial (RCT) registration, changes to registration data and reporting completeness for articles in ICMJE journals during the 2.5 years after the trial registration policy took effect. For a set of 149 reports of 152 RCTs with a ClinicalTrials.gov registration number, published from September 2005 to April 2008, we evaluated the completeness of 9 items from the WHO 20-item Minimum Data Set relevant for assessing trial quality. We also assessed changes to the registration elements at the Archive site of ClinicalTrials.gov and compared published and registry data. RCTs were mostly registered before the 13 September 2005 deadline (n = 101, 66.4%); 118 (77.6%) started recruitment before and 31 (20.4%) after registration. At the time of registration, the 152 RCTs had a total of 224 missing registry fields, most commonly 'Key secondary outcomes' (44.1% of RCTs) and 'Primary outcome' (38.8%). More RCTs with post-registration recruitment had missing Minimum Data Set items than RCTs with pre-registration recruitment: 24/31 (77.4%) vs. 57/118 (48.3%) (χ²(1) = 7.255, P = 0.007). Major changes in the data entries were found for 31 (25.2%) RCTs. The number of RCTs with differences between registered and published data ranged from 21 (13.8%) for Study type to 118 (77.6%) for Target sample size. ICMJE journals published RCTs with proper registration, but the registration data were often inadequate, underwent substantial changes in the registry over time and differed between registry and publication. Editors need to establish quality control procedures in their journals so that these continue to contribute to the increased transparency of clinical trials.
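The reported χ²(1) = 7.255 can be reproduced from the abstract's counts, assuming the 2×2 table pairs recruitment timing (pre- vs. post-registration) against missing vs. complete Minimum Data Set items, with a Yates continuity correction applied. The sketch below checks this in plain Python; the table layout is inferred, not stated explicitly in the abstract.

```python
# Reproduce the Yates-corrected chi-square statistic for the 2x2 table
# implied by the abstract: 57 of 118 pre-registration RCTs and
# 24 of 31 post-registration RCTs had missing Minimum Data Set items.

def yates_chi2(table):
    """Chi-square statistic for a 2x2 table with Yates continuity correction."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, observed_row in enumerate(table):
        for j, observed in enumerate(observed_row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (abs(observed - expected) - 0.5) ** 2 / expected
    return chi2

# rows: pre-registration (57 missing, 61 complete), post (24 missing, 7 complete)
stat = yates_chi2([(57, 61), (24, 7)])
print(round(stat, 3))  # 7.255, matching the reported value
```

That the Yates-corrected statistic (rather than the uncorrected one, which is about 8.4) matches the published 7.255 suggests the authors used the continuity-corrected test.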

    The use of clinical study reports to enhance the quality of systematic reviews: a survey of systematic review authors

    Background: Clinical study reports (CSRs) are produced for marketing authorisation applications. They often contain considerably more information about, and data from, clinical trials than corresponding journal publications. Use of data from CSRs might help circumvent reporting bias, but many researchers appear to be unaware of their existence or potential value. Our survey aimed to gain insight into the level of familiarity, understanding and use of CSRs, and to raise awareness of their potential within the systematic review community. We also aimed to explore the potential barriers faced when obtaining and using CSRs in systematic reviews. Methods: An online survey was conducted of systematic reviewers who (i) had requested or used CSRs, (ii) had considered but not used CSRs, or (iii) had not considered using CSRs. Cochrane reviewers were contacted twice via the Cochrane monthly digest. Non-Cochrane reviewers were reached via journal and other website postings. Results: One hundred sixty respondents answered an open invitation and completed the questionnaire; 20/160 (13%) had previously requested or used CSRs and other regulatory documents, 7/160 (4%) had considered but not used CSRs and 133/160 (83%) had never considered this data source. Survey respondents mainly sought data from the European Medicines Agency (EMA) and/or the Food and Drug Administration (FDA). Motivation for using CSRs stemmed mainly from concerns about reporting bias (11/20, 55%), specifically outcome reporting bias (11/20, 55%) and publication bias (5/20, 25%). The barriers to using CSRs noted by all types of respondents included current limited access to these documents (43 respondents), the time and resources needed to obtain and include these data in evidence syntheses (n = 25) and lack of guidance about how to use these sources in systematic reviews (n = 26). 
Conclusions: Most respondents (irrespective of whether they had previously used them) agreed that access to CSRs is important, and suggested that further guidance on how to use and include these data would help to promote their use in future systematic reviews. Most respondents who received CSRs considered them to be valuable in their systematic review and/or meta-analysis.

    Why prospective registration of systematic reviews makes sense

    Prospective registration of systematic reviews promotes transparency, helps reduce potential for bias and serves to avoid unintended duplication of reviews. Registration offers advantages to many stakeholders in return for modest additional effort from the researchers registering their reviews.

    Publication Delay of Randomized Trials on 2009 Influenza A (H1N1) Vaccination

    Background: Randomized evidence on vaccine immunogenicity and safety is urgently needed in the setting of pandemics with newly emerging infectious agents. We carried out an observational survey to evaluate how many of the registered randomized controlled trials testing 2009 H1N1 vaccines were published, and what the time lag was from their start to publication and from their completion to publication. Methods: PubMed, EMBASE and 9 clinical trial registries were searched for eligible randomized controlled trials. The unit of analysis was the single randomized trial of influenza vaccines in any individuals and any setting. Results: 73 eligible trials registered in 2009–2010 were identified. By June 30, 2011 only 21 (29%) of these trials had been published, representing 38% of the randomized sample size (19,905 of 52,765 participants). Trials starting later were published less rapidly (hazard ratio 0.42 per month; 95% confidence interval: 0.27 to 0.64; P < 0.001). Similarly, trials completed later were published less rapidly (hazard ratio 0.43 per month; 95% CI: 0.27 to 0.67; P < 0.001). Randomized controlled trials were completed promptly (median, 5 months from start to completion), but only a minority were subsequently published. Conclusions: Most registered randomized trials on vaccines for the H1N1 pandemic were not published in the peer-reviewed literature.